Self-supervised models are increasingly prevalent in machine learning (ML) since they reduce the need for expensively labeled data. Because of their versatility in downstream applications, they are increasingly used as a service exposed via public APIs. At the same time, these encoder models are particularly vulnerable to model stealing attacks due to the high dimensionality of the vector representations they output. Yet encoders remain undefended: existing mitigation strategies for stealing attacks focus on supervised learning. We introduce a new dataset inference defense, which uses the private training set of the victim encoder model to attribute its ownership in the event of stealing. The intuition is that the log-likelihood of an encoder's output representations is higher on the victim's training data than on test data if the encoder was stolen from the victim, but not if it was trained independently. We compute this log-likelihood using density estimation models. As part of our evaluation, we also propose measuring the fidelity of stolen encoders and quantifying the effectiveness of theft detection without involving downstream tasks; instead, we leverage mutual information and distance measurements. Our extensive empirical results in the vision domain demonstrate that dataset inference is a promising direction for defending self-supervised models against model stealing.
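To make the intuition concrete, here is a minimal sketch of the dataset-inference test, assuming a suspect encoder exposed as a callable `encode` and using a Gaussian mixture as the density estimator; the defense's actual estimator and test statistic may differ:

```python
# Hedged sketch: fit a density model to a suspect encoder's representations
# of the victim's private training data, then test whether training
# representations score a higher log-likelihood than test representations.
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

def dataset_inference(encode, x_train, x_test, n_components=10, alpha=0.01):
    """encode: suspect encoder mapping an input batch to representation vectors."""
    r_train, r_test = encode(x_train), encode(x_test)
    # Fit the density estimator on half of the training representations,
    # score the held-out half against the test representations.
    half = len(r_train) // 2
    density = GaussianMixture(n_components=n_components).fit(r_train[:half])
    ll_train = density.score_samples(r_train[half:])
    ll_test = density.score_samples(r_test)
    # One-sided test: stolen encoders should assign higher likelihood to train data.
    t, p = stats.ttest_ind(ll_train, ll_test, alternative="greater")
    return p < alpha  # True -> flag the suspect encoder as likely stolen
```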
The lack of well-calibrated confidence estimates makes neural networks inadequate in safety-critical domains such as autonomous driving or healthcare. In these settings, the ability to abstain from making a prediction on out-of-distribution (OOD) data can be as important as correctly classifying in-distribution data. We introduce p-DkNN, a novel inference procedure that takes a trained deep neural network and analyzes the similarity structure of its intermediate hidden representations to compute p-values associated with the end-to-end model prediction. The intuition is that statistical tests performed on latent representations can serve not only as a classifier, but also offer statistically well-grounded uncertainty estimates. p-DkNN is scalable and leverages the composition of representations learned by hidden layers, which is what makes deep representation learning successful. Our theoretical analysis builds on Neyman-Pearson classification and connects it to recent advances in selective classification (the reject option). We demonstrate a favorable trade-off between abstaining from predicting on OOD inputs and maintaining high accuracy on in-distribution inputs. We find that p-DkNN forces adaptive attackers crafting adversarial examples, a form of worst-case OOD input, to introduce semantically meaningful changes to the input.
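As an illustration of the procedure's shape, the following hedged sketch computes per-layer empirical p-values from k-nearest-neighbor label disagreement and combines them with Fisher's method; the feature extraction, calibration scores, and combination rule are stand-ins for the paper's exact choices:

```python
# Sketch: per-layer nonconformity from kNN label disagreement, turned into
# empirical p-values against a calibration set, then combined across layers.
import numpy as np
from scipy.stats import chi2
from sklearn.neighbors import NearestNeighbors

def p_dknn_pvalue(layer_feats_cal, y_cal, cal_scores, layer_feats_x, candidate, k=50):
    """layer_feats_cal[l]: calibration features at layer l;
    cal_scores[l]: held-out nonconformity scores for layer l;
    layer_feats_x[l]: the test input's features at layer l."""
    p_values = []
    for l, feats in enumerate(layer_feats_cal):
        nn = NearestNeighbors(n_neighbors=k).fit(feats)
        _, idx = nn.kneighbors(layer_feats_x[l].reshape(1, -1))
        # Nonconformity: how many nearest neighbours disagree with the candidate label.
        score = np.sum(y_cal[idx[0]] != candidate)
        # Empirical p-value: fraction of calibration scores at least as extreme.
        p_values.append(np.mean(cal_scores[l] >= score))
    # Combine per-layer p-values (Fisher's method, an assumed combination rule).
    stat = -2.0 * np.sum(np.log(np.maximum(p_values, 1e-12)))
    return chi2.sf(stat, df=2 * len(p_values))  # small -> abstain from predicting
```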
Selective classification is the task of rejecting inputs a model would predict incorrectly on, through a trade-off between input space coverage and model accuracy. Current methods for selective classification impose constraints on either the model architecture or the loss function; this inhibits their adoption in practice. In contrast to prior work, we show that state-of-the-art selective classification performance can be attained solely by studying the (discretized) training dynamics of a model. We propose a general framework that, for a given test input, monitors metrics capturing the disagreement of the intermediate models obtained during training with the final predicted label; we then reject data points that exhibit too much disagreement at late stages of training. In particular, we instantiate a method that tracks when the label predicted during training stops disagreeing with the final predicted label. Our experimental evaluation shows that our method achieves state-of-the-art accuracy/coverage trade-offs on typical selective classification benchmarks.
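A minimal sketch of such a disagreement monitor follows, assuming access to the list of intermediate checkpoints saved during training and that each checkpoint returns class logits; thresholds and the definition of the "late phase" are illustrative assumptions:

```python
# Sketch: run saved checkpoints on a test input and abstain if checkpoints
# from the late phase of training still disagree with the final label.
import numpy as np

def selective_predict(checkpoints, x, late_fraction=0.5, max_disagreement=0.1):
    """checkpoints: intermediate models saved during training, in order."""
    preds = [int(np.argmax(m(x))) for m in checkpoints]
    final_label = preds[-1]
    late = preds[int(len(preds) * (1 - late_fraction)):]
    disagreement = np.mean([p != final_label for p in late])
    if disagreement > max_disagreement:
        return None  # abstain: training never settled on this label
    return final_label
```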
Self-supervised learning (SSL) is an increasingly popular ML paradigm that trains models to transform complex inputs into representations without relying on explicit labels. These representations encode similarity structures that enable efficient learning of multiple downstream tasks. Recently, ML-as-a-Service providers have begun offering trained SSL models over inference APIs, which transform user inputs into useful representations for a fee. However, the high cost of training these models and their exposure over APIs both make black-box extraction a realistic security threat. We therefore explore model stealing attacks against SSL. Unlike traditional model extraction against classifiers that output labels, the victim models here output representations; these representations are of significantly higher dimensionality than the low-dimensional prediction scores of classifiers. We construct several novel attacks and find that approaches that train directly on the representations stolen from the victim are query-efficient and yield high accuracy for downstream models. We then show that existing defenses against model extraction are inadequate and cannot easily be retrofitted to the specificities of SSL.
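The attack family that trains directly on stolen representations can be illustrated with a short sketch: query the victim API for representations and train a student encoder to match them with a mean-squared-error objective. The victim API interface, student architecture, and loss here are assumptions; the paper also considers other objectives:

```python
# Sketch of direct-representation stealing against an SSL encoder API.
import torch
import torch.nn as nn

def steal_encoder(victim_api, student: nn.Module, query_loader, epochs=10, lr=1e-3):
    """victim_api: black-box callable mapping an input batch to representations."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for x in query_loader:
            with torch.no_grad():
                target = victim_api(x)  # one (paid) API query per batch
            loss = nn.functional.mse_loss(student(x), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student  # usable as a drop-in encoder for downstream tasks
```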
Applying machine learning (ML) to sensitive domains requires privacy protection of the underlying training data through formal privacy frameworks, such as differential privacy (DP). Yet, usually, the privacy of the training data comes at the cost of the resulting ML models' utility. One reason for this is that DP uses one uniform privacy budget epsilon for all training data points, which has to align with the strictest privacy requirement encountered among all data holders. In practice, different data holders have different privacy requirements and data points of data holders with lower requirements can contribute more information to the training process of the ML models. To account for this need, we propose two novel methods based on the Private Aggregation of Teacher Ensembles (PATE) framework to support the training of ML models with individualized privacy guarantees. We formally describe the methods, provide a theoretical analysis of their privacy bounds, and experimentally evaluate their effect on the final model's utility using the MNIST, SVHN, and Adult income datasets. Our empirical results show that the individualized privacy methods yield ML models of higher accuracy than the non-individualized baseline. Thereby, we improve the privacy-utility trade-off in scenarios in which different data holders consent to contribute their sensitive data at different individual privacy levels.
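For orientation, here is a minimal sketch of PATE-style noisy aggregation extended with per-teacher vote weights as one conceivable hook for individualized privacy; the weighting scheme and noise scale are illustrative assumptions, not the paper's exact mechanisms:

```python
# Sketch: noisy weighted plurality vote over an ensemble of teachers.
import numpy as np

def noisy_weighted_aggregate(teacher_preds, weights, n_classes, sigma=40.0, rng=None):
    """teacher_preds: per-teacher predicted labels for one query;
    weights: per-teacher vote weights (e.g., larger for teachers trained on
    data whose holders consented to a looser privacy budget -- an assumption)."""
    rng = rng or np.random.default_rng()
    votes = np.zeros(n_classes)
    for pred, w in zip(teacher_preds, weights):
        votes[pred] += w
    # Noise calibrated to the privacy budget hides any individual teacher's vote.
    votes += rng.normal(scale=sigma, size=n_classes)
    return int(np.argmax(votes))  # label released to train the student model
```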
In model extraction attacks, adversaries can steal a machine learning model exposed via a public API by repeatedly querying it and adjusting their own model based on the obtained predictions. To prevent model stealing, existing defenses focus on detecting malicious queries or truncating or distorting outputs, and thus necessarily introduce a trade-off between robustness and model utility for legitimate users. Instead, we propose to impede model extraction by requiring users to complete a proof-of-work before they can read the model's predictions. This deters attackers by greatly increasing (even up to 100x) the computational effort needed to leverage query access for model extraction. Since we calibrate the effort required to complete the proof-of-work to each query, this introduces only a slight overhead for regular users (up to 2x). To achieve this, our calibration applies tools from differential privacy to measure the information revealed by a query. Our method requires no modification of the victim model and can be applied by machine learning practitioners to guard their publicly exposed models against being easily stolen.
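A hedged sketch of the defense's overall shape: the server issues a hash puzzle whose difficulty grows with an estimate of the information the query reveals. The `privacy_cost` input stands in for the differential-privacy-based accounting, and the hash construction is an assumption:

```python
# Sketch: proof-of-work gating of API predictions, with difficulty scaled
# by a per-query information-leakage estimate.
import hashlib
import os

def issue_puzzle(privacy_cost, base_difficulty=12):
    """privacy_cost: placeholder for a DP-style estimate of query leakage."""
    difficulty = base_difficulty + int(privacy_cost)  # leading zero bits required
    return os.urandom(16), difficulty

def solve_puzzle(challenge, difficulty):
    """Client-side work loop; the server verifies the nonce with one hash."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty) == 0:
            return nonce  # submit with the query to unlock the prediction
        nonce += 1
```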
In federated learning (FL), data does not leave personal devices when they jointly train a machine learning model. Instead, the devices share gradients with a central party (e.g., a company). Because data never "leaves" personal devices, FL is presented as privacy-preserving. Yet, it was recently shown that this protection is but a thin facade: even a passive attacker observing gradients can reconstruct data of individual users. In this paper, we argue that prior work still largely underestimates the vulnerability of FL, because prior efforts exclusively consider passive, honest-but-curious attackers. Instead, we introduce an active and dishonest attacker acting as the central party, who is able to modify the shared model's weights before users compute their model gradients. We call the modified weights "trap weights". Our active attacker recovers user data perfectly and at near-zero cost: the attack requires no complex optimization objective. Instead, it exploits the inherent data leakage of model gradients and amplifies this effect by maliciously altering the weights of the shared model. These specificities enable our attack to scale to models trained on large mini-batches of data. Where attackers from prior work need hours to recover a single data point, our method needs milliseconds to capture the full mini-batch of data from both fully connected and convolutional deep neural networks. Finally, we consider mitigations. We observe that current implementations of differential privacy (DP) in FL are flawed, as they explicitly trust the central party with the crucial task of adding DP noise, and thus provide no protection against a malicious central party. We also consider other defenses and explain why they are similarly inadequate. A significant redesign of FL is required for it to provide any meaningful form of data privacy to users.
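The leakage the attack amplifies can be seen in a short sketch: for a fully connected first layer with ReLU activations, if exactly one example in the batch activates a neuron, dividing that neuron's weight gradient by its bias gradient reproduces the example exactly. Trap weights are crafted so this condition holds for many neurons; the recovery step below assumes the shared gradients are already in hand:

```python
# Sketch: recovering inputs from first-layer gradients of a ReLU network.
# For y = ReLU(Wx + b), a single active example gives dL/dW_i = dL/dy_i * x
# and dL/db_i = dL/dy_i, so their ratio is x itself.
import numpy as np

def recover_inputs(grad_W, grad_b, eps=1e-8):
    """grad_W: (n_neurons, n_features) first-layer weight gradient;
    grad_b: (n_neurons,) first-layer bias gradient."""
    recovered = []
    for gw, gb in zip(grad_W, grad_b):
        if abs(gb) > eps:              # neuron fired for at least one input
            recovered.append(gw / gb)  # exact input if only one example fired
    return np.array(recovered)
```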
A Digital Twin (DT) is a simulation of a physical system that provides information to make decisions that add economic, social or commercial value. The behaviour of a physical system changes over time; a DT must therefore be continually updated with data from the physical system to reflect its changing behaviour. For resource-constrained systems, updating a DT is non-trivial because of challenges such as on-board learning and off-board data transfer. This paper presents a framework for updating data-driven DTs of resource-constrained systems, geared towards system health monitoring. The proposed solution consists of: (1) an on-board system running a light-weight DT that allows the prioritisation and parsimonious transfer of data generated by the physical system; and (2) off-board robust updating of the DT and detection of anomalous behaviours. Two case studies using a production gas turbine engine system demonstrate the digital representation accuracy for real-world, time-varying physical systems.
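One conceivable form of the on-board prioritisation step is sketched below: score each new sample by the light-weight twin's prediction error and transmit only the most surprising samples within a bandwidth budget. The function names and the per-sample scalar prediction (e.g., a health indicator) are assumptions, not the paper's implementation:

```python
# Sketch: parsimonious data transfer driven by the on-board twin's residuals.
import numpy as np

def prioritise_for_transfer(twin_predict, sensor_batch, targets, budget):
    """twin_predict: on-board surrogate model returning one value per sample;
    budget: maximum number of samples to send off-board."""
    errors = np.abs(twin_predict(sensor_batch) - targets)
    ranked = np.argsort(errors)[::-1]  # largest residuals (most surprising) first
    keep = ranked[:budget]
    return sensor_batch[keep], targets[keep]  # sent off-board for robust updating
```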
Deep neural networks (DNNs) have outstanding performance in various applications. Despite numerous efforts of the research community, out-of-distribution (OOD) samples remain a significant limitation of DNN classifiers. The ability to identify previously unseen inputs as novel is crucial in safety-critical applications such as self-driving cars, unmanned aerial vehicles and robots. Existing approaches to detecting OOD samples treat a DNN as a black box and assess the confidence score of the output predictions. Unfortunately, this method frequently fails, because DNNs are not trained to reduce their confidence on OOD inputs. In this work, we introduce a novel method for OOD detection. Our method is motivated by a theoretical analysis of neuron activation patterns (NAPs) in ReLU-based architectures. The proposed method does not introduce a high computational workload, owing to the binary representation of the activation patterns extracted from convolutional layers. Extensive empirical evaluation proves its high performance on various DNN architectures and seven image datasets.
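A minimal sketch of the activation-pattern idea, with the pattern source and distance measure assumed rather than taken from the paper: binarise a layer's ReLU activations, then flag inputs whose nearest known training pattern is far away in Hamming distance:

```python
# Sketch: OOD scoring from binary neuron activation patterns.
import numpy as np

def binary_pattern(activations):
    """Binarise ReLU activations from a (convolutional) layer."""
    return (activations > 0).astype(np.uint8).ravel()

def nap_ood_score(train_patterns, test_activations):
    """train_patterns: (n_train, n_units) binary matrix of patterns seen
    on in-distribution training data."""
    p = binary_pattern(test_activations)
    hamming = np.sum(train_patterns != p, axis=1)
    return hamming.min()  # large distance -> likely out-of-distribution

# Usage: is_ood = nap_ood_score(train_patterns, layer_acts) > threshold
```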
Recent advances in upper limb prostheses have led to significant improvements in the number of movements provided by the robotic limb. However, the method for controlling multiple degrees of freedom via user-generated signals remains challenging. To address this issue, various machine learning controllers have been developed to better predict movement intent. As these controllers become more intelligent and take on more autonomy in the system, the traditional approach of representing the human-machine interface as a human controlling a tool becomes limiting. One possible approach to improve the understanding of these interfaces is to model them as collaborative, multi-agent systems through the lens of joint action. The field of joint action has been commonly applied to two human partners who are trying to work jointly together to achieve a task, such as singing or moving a table together, by effecting coordinated change in their shared environment. In this work, we compare different prosthesis controllers (proportional electromyography with sequential switching, pattern recognition, and adaptive switching) in terms of how they present the hallmarks of joint action. The results of the comparison lead to a new perspective for understanding how existing myoelectric systems relate to each other, along with recommendations for how to improve these systems by increasing the collaborative communication between each partner.